📝 Text Algorithms
String Matching, Suffix Trees, Edit Distance, Compression
Scoured 160,069 posts in 61.9 ms
castnettech/mnemosyne: LLM context compression and retrieval engine. Zero dependencies. Sub-100ms queries. 40-70% token reduction.
🔄 Subinterpreters · github.com · 4d · r/SideProject
Fujitsu One Compression (LLM Quantization)
📦 Compression Algorithms · fujitsuresearch.github.io · 1d · Hacker News
Google Research talks compression technology it says will greatly reduce memory needed for AI processing
💾 Cache-Oblivious Algorithms · networkworld.com · 28m
Google's TurboQuant saves memory, but won't save us from DRAM-pricing hell
🗺️ Region Inference · theregister.com · 23h
Autoencoders, VAEs & GANs Explained Simply
🪜 Recursive Descent · medium.com · 2d
Google TurboQuant and What It Changes in Language Models
🪜 Recursive Descent · medium.com · 3d
Compression.zstd – Compression compatible with the Zstandard format
📦 Compression Algorithms · docs.python.org · 3d · Hacker News
Google's TurboQuant Changes the Economics of Local AI Inference
🗺️ Region Inference · medium.com · 4d
The Sequence Radar #832: Last Week in AI: Compression, Voice, and Why It All Matters
🏁 Language Benchmarks · thesequence.substack.com · 4d · Substack
Pure C implementation of the TurboQuant paper (ICLR 2026) for KV cache compression in LLM inference.
🗺️ Region Inference · github.com · 1d · r/LocalLLaMA
Archive Format Guide 2024: ZIP vs 7Z vs RAR vs TAR vs GZIP - Complete Compression Comparison
📦 Compression Algorithms · luxa.org · 3d
I Read a Gzip Decompressor Written in 250 Lines of Rust — and Compression Finally Made Sense
📦 Compression Algorithms · medium.com · 5d
TIL: Quantisation
∀ Quantified Types · anup.io · 5d
Google's New AI Compression Could Help Lower RAM Prices - Here's How
⚡ Cache-Aware Algorithms · bgr.com · 5d
Google research cuts LLM memory use by 6x
📊 Memory Profilers · kite.kagi.com · 6d
jhammant/Turbo1bit: Turbo1Bit: Combining 1-bit LLM weights (Bonsai) with TurboQuant KV cache compression for maximum inference efficiency. 4.2x KV cache compression + 16x weight compression = ~10x total memory reduction.
🗺️ Region Inference · github.com · 39m · Hacker News
Will Google's TurboQuant AI Compression Finally Demolish the AI Memory Wall?
💾 Cache-Oblivious Algorithms · buysellram.com · 6d · Hacker News
MartinCrespoC/QuantumLeap---Llama.cpp-TurboQuant: 🚀 Run any LLM on any hardware. 130% faster MoE inference with ExpertFlow + TurboQuant KV compression. Ollama-compatible API. Built on llama.cpp.
📊 perf Tools · github.com · 2d · Hacker News
mmgehlot/bitpolar: BitPolar: near-optimal vector quantization — 3-8 bit compression with zero training. 58 integrations across every major AI framework.
🎯 Bit Vectors · github.com · 3d · Hacker News
yasha1971-coder/aceapex: compression, lossless, lz77, zstd, cplusplus, performance
📦 Compression Algorithms · github.com · 6d · Hacker News